Adversarial attacks by attaching noise markers on the face against deep face recognition
Authors
Abstract
Similar Resources
Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks
Deep neural network (DNN) based models have high expressive power and learning capacity. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions learned within their many layers of representation. Realizing this, many researchers have started to design methods to exploit the drawbacks of deep learning based algorithms...
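As a rough illustration of the kind of attack such work studies, here is a minimal FGSM-style sketch in PyTorch; the classifier `model`, the input tensor shapes, and the epsilon budget are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a gradient-based adversarial perturbation (FGSM-style)
# against a generic face classifier. Assumes `model` maps an image batch
# [N, 3, H, W] to class logits and `true_label` is a LongTensor of shape [N].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```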
Face Recognition by Cognitive Discriminant Features
Face recognition is still an active pattern analysis topic. Faces have already been treated as objects or textures, but the human face recognition system takes a different approach. People refer to faces by their most discriminant features, and usually describe faces in sentences like "She's snub-nosed", "he's got a long nose", or "he's got round eyes", and so on. These...
Adversarial Discriminative Heterogeneous Face Recognition
The gap between sensing patterns of different face modalities remains a challenging problem in heterogeneous face recognition (HFR). This paper proposes an adversarial discriminative feature learning framework to close the sensing gap via adversarial learning on both raw-pixel space and compact feature space. This framework integrates cross-spectral face hallucination and discriminative feature...
Face Recognition using PCA, Deep Face Method
The process of face recognition involves inspecting the facial features in an image, recognizing those features, and comparing them to the faces stored in a database. There are many algorithms capable of performing face recognition, such as Principal Component Analysis, Discrete Cosine Transform, 3D recognition methods, and Gabor wavelet methods. There were many issu...
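To make the PCA route concrete, here is a minimal eigenfaces-style sketch using scikit-learn: faces are flattened to vectors, projected onto principal components, and a probe is matched to its nearest gallery neighbour. The dataset layout and variable names are assumptions for illustration.

```python
# Minimal eigenfaces-style sketch: PCA features + nearest-neighbour matching.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_eigenfaces(gallery_images, gallery_ids, n_components=50):
    """gallery_images: array of shape (n_samples, n_pixels), flattened faces."""
    pca = PCA(n_components=n_components, whiten=True).fit(gallery_images)
    features = pca.transform(gallery_images)
    matcher = KNeighborsClassifier(n_neighbors=1).fit(features, gallery_ids)
    return pca, matcher

def identify(pca, matcher, probe_image):
    """Project a flattened probe face and return the closest gallery identity."""
    return matcher.predict(pca.transform(probe_image.reshape(1, -1)))[0]
```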
Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition
In this paper we show that misclassification attacks against face-recognition systems based on deep neural networks (DNNs) are more dangerous than previously demonstrated, even in contexts where the adversary can manipulate only her physical appearance (versus directly manipulating the image input to the DNN). Specifically, we show how to create eyeglasses that, when worn, can succeed in target...
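In the same spirit as such physically realizable attacks, the sketch below restricts an optimised perturbation to a fixed facial region (e.g. an eyeglass-shaped mask) so that only that region is modified. The model, mask, and target label are illustrative assumptions, not the authors' actual attack pipeline.

```python
# Hedged sketch: optimise a perturbation only inside a binary mask so that
# the classifier predicts `target_label`. Assumes `image` and `mask` share
# shape [1, 3, H, W] and `target_label` is a LongTensor of shape [1].
import torch
import torch.nn.functional as F

def masked_targeted_attack(model, image, target_label, mask, steps=100, lr=0.01):
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0.0, 1.0)
        # Minimising cross-entropy on the target class pushes the prediction
        # towards the chosen (wrong) identity.
        loss = F.cross_entropy(model(adv), target_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (image + delta.detach() * mask).clamp(0.0, 1.0)
```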
Journal
Journal title: Journal of Information Security and Applications
Year: 2021
ISSN: 2214-2126
DOI: 10.1016/j.jisa.2021.102874